64 research outputs found

    The impact of imperfect weather forecasts on wind power forecasting performance: Evidence from two wind farms in Greece

    Weather variables are an important driver of power generation from renewable energy sources. However, accurately predicting such variables is a challenging task, and this has a significant impact on the accuracy of the power generation forecasts. In this study, we explore the impact of imperfect weather forecasts on two classes of forecasting methods (statistical and machine learning) for the case of wind power generation. We perform a stress test analysis to measure the robustness of the different methods to imperfect weather input, focusing on both the point forecasts and the 95% prediction intervals. The results indicate that different methods should be considered according to the uncertainty characterizing the weather forecasts.
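
    The following sketch illustrates the kind of stress test the abstract describes: a simple wind-to-power model is fed wind-speed inputs perturbed with increasing noise, and both point accuracy and 95% interval coverage are tracked. The data, the cubic regression model, and the noise levels are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch (not the paper's code): stress-testing a wind power model
# against increasingly imperfect weather input. All data and the model
# form are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "actual" wind speed (m/s) and the power it produced (MW).
speed_actual = rng.uniform(3, 18, size=500)
power_actual = 0.05 * speed_actual**3 + rng.normal(0, 2, size=500)

# Fit a simple cubic regression of power on wind speed (a stand-in for
# the statistical / machine learning methods compared in the paper).
coeffs = np.polyfit(speed_actual, power_actual, deg=3)

for noise_sd in [0.0, 0.5, 1.0, 2.0]:            # increasing forecast imperfection
    speed_forecast = speed_actual + rng.normal(0, noise_sd, size=500)
    power_forecast = np.polyval(coeffs, speed_forecast)

    resid = power_actual - power_forecast
    mae = np.mean(np.abs(resid))                 # point-forecast accuracy
    half_width = 1.96 * np.std(resid)            # naive 95% prediction interval
    coverage = np.mean(np.abs(resid) <= half_width)

    print(f"noise_sd={noise_sd:.1f}  MAE={mae:.2f}  95% coverage={coverage:.2%}")
```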

    Model combinations through revised base-rates

    Standard selection criteria for forecasting models focus on information that is calculated for each series independently, disregarding the general tendencies and performance of the candidate models. In this paper, we propose a new way to perform statistical model selection and model combination that incorporates the base rates of the candidate forecasting models, which are then revised so that the per-series information is taken into account. We examine two schemes that are based on the precision and sensitivity information from the contingency table of the base rates. We apply our approach to pools of either exponential smoothing or ARMA models, considering both simulated and real time series, and show that our schemes work better than standard statistical benchmarks. We test the significance and sensitivity of our results, discuss the connection of our approach to other cross-learning approaches, and offer insights regarding implications for theory and practice.
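
    As a rough illustration of the idea of revising per-series information with model base rates, the sketch below builds a hypothetical contingency table between the model selected in-sample and the model that was actually best out-of-sample, derives precision and sensitivity from it, and uses the precision to reweight per-series selection weights. The scheme, model names, and counts are assumptions for illustration; they are not the paper's algorithm.

```python
# Hedged illustration of revising per-series selection with model base rates.
# Names and numbers are made up; this only mirrors the idea in the abstract.
import numpy as np

models = ["SES", "Holt", "Damped"]

# Hypothetical contingency counts from a reference set of series:
# rows = model picked in-sample (e.g. by AIC), cols = model that was
# actually best out-of-sample.
counts = np.array([[60, 25, 15],
                   [20, 50, 30],
                   [10, 20, 70]])

# Precision of each in-sample pick: P(actually best | picked).
precision = counts.diagonal() / counts.sum(axis=1)

# Sensitivity (recall) of each pick: P(picked | actually best).
sensitivity = counts.diagonal() / counts.sum(axis=0)

# For a new series, suppose the in-sample criterion weights favour Holt slightly.
aic_weights = np.array([0.30, 0.40, 0.30])

# Revise the per-series weights with the base-rate precision and renormalise.
revised = aic_weights * precision
revised /= revised.sum()

for m, w_raw, p, w_rev in zip(models, aic_weights, precision, revised):
    print(f"{m:7s} in-sample weight={w_raw:.2f}  precision={p:.2f}  revised weight={w_rev:.2f}")
```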

    On the selection of forecasting accuracy measures

    A lot of controversy exists around the choice of the most appropriate error measure for assessing the performance of forecasting methods. While statisticians argue for the use of measures with good statistical properties, practitioners prefer measures that are easy to communicate and understand. Moreover, researchers argue that the loss function used for parameterizing a model should be aligned with how forecasting performance is subsequently measured. In this paper we ask: Does it matter? Will the relative ranking of the forecasting methods change significantly if we choose one measure over another? Will a mismatch between the in-sample loss function and the out-of-sample performance measure decrease the performance of the forecasting models? Focusing on the average ranked point forecast accuracy, we review the measures most commonly used in both academia and practice and perform a large-scale empirical study to understand the importance of the choice between measures. Our results suggest that there are only small discrepancies between the different error measures, especially within each measure category (percentage, relative, or scaled).
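
    For reference, the sketch below computes one representative measure from each category named in the abstract (percentage, relative, scaled) for a hypothetical forecast against a naive benchmark. The series and forecasts are invented for illustration; the measures themselves (MAPE, relative MAE, MASE) are standard definitions.

```python
# Hedged sketch of the three measure categories (percentage, relative, scaled)
# computed for a made-up test set and forecast.
import numpy as np

y_train = np.array([112., 118., 132., 129., 121., 135., 148., 148.])
y_test  = np.array([136., 119., 104., 118.])
f_a     = np.array([130., 123., 110., 120.])   # forecasts from "method A"
f_naive = np.full(4, y_train[-1])              # naive: repeat last observation

def mape(y, f):                                 # percentage error
    return np.mean(np.abs((y - f) / y)) * 100

def rel_mae(y, f, f_ref):                       # error relative to a benchmark
    return np.mean(np.abs(y - f)) / np.mean(np.abs(y - f_ref))

def mase(y, f, y_insample):                     # error scaled by in-sample naive error
    scale = np.mean(np.abs(np.diff(y_insample)))
    return np.mean(np.abs(y - f)) / scale

print("MAPE   :", round(mape(y_test, f_a), 2), "%")
print("RelMAE :", round(rel_mae(y_test, f_a, f_naive), 3))
print("MASE   :", round(mase(y_test, f_a, y_train), 3))
```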

    Cross-temporal aggregation: Improving the forecast accuracy of hierarchical electricity consumption

    Achieving high accuracy in energy consumption forecasting is critical for improving energy management and planning. However, this requires the selection of appropriate forecasting models, able to capture the individual characteristics of the series to be predicted, which is a task that involves considerable uncertainty. When forecasts are required at multiple levels, such as the system and region level, not only is the model selection problem expanded to multiple time series, but we also require aggregation consistency of the forecasts across levels. Although hierarchical forecasting approaches, such as bottom-up, top-down, and optimal reconciliation, can address the aggregation consistency concerns, they do not resolve the model selection uncertainty. To address this issue, we rely on Multiple Temporal Aggregation (MTA), which has been shown to mitigate the model selection problem for low-frequency time series. We propose a modification of the Multiple Aggregation Prediction Algorithm, a special implementation of MTA, for high-frequency time series to better handle the undesirable effect of seasonality shrinkage that MTA implies, and combine it with conventional cross-sectional hierarchical forecasting. The impact of incorporating temporal aggregation in hierarchical forecasting is empirically assessed using a real data set from five bank branches. We show that the proposed MTA approach, combined with the optimal reconciliation method, demonstrates superior accuracy, aggregation consistency, and reliable automatic forecasting.
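
    To make the core idea of Multiple Temporal Aggregation concrete, the sketch below aggregates a short monthly series to coarser frequencies, produces a trivial forecast at each level, and combines the implied monthly forecasts with equal weights. This is a deliberately simplified stand-in, under assumed data and a naive per-level forecast; it is not the paper's modified MAPA or its reconciliation step.

```python
# Hedged, simplified sketch of Multiple Temporal Aggregation (MTA):
# aggregate a monthly series to coarser frequencies, forecast each level
# naively, and combine the implied monthly forecasts. Data are illustrative.
import numpy as np

monthly = np.array([20., 22., 25., 24., 23., 26., 30., 29., 27., 28., 31., 33.])

def aggregate(series, k):
    """Non-overlapping temporal aggregation into buckets of size k."""
    n = (len(series) // k) * k
    return series[:n].reshape(-1, k).sum(axis=1)

implied = []
for k in [1, 3, 12]:                       # monthly, quarterly, annual levels
    agg = aggregate(monthly, k)
    level_forecast = agg[-1]               # naive forecast: last aggregate value
    implied.append(level_forecast / k)     # disaggregate back to a monthly rate

# Equal-weight combination across temporal levels (the spirit of MTA).
mta_monthly_forecast = np.mean(implied)
print("Per-level implied monthly forecasts:", np.round(implied, 2))
print("Combined monthly forecast:", round(mta_monthly_forecast, 2))
```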

    The future of forecasting competitions: Design attributes and principles

    Forecasting competitions are the equivalent of laboratory experimentation widely used in physical and life sciences. They provide useful, objective information to improve the theory and practice of forecasting, advancing the field, expanding its usage, and enhancing its value to decision and policymakers. We describe 10 design attributes to be considered when organizing forecasting competitions, taking into account trade-offs between optimal choices and practical concerns, such as costs, as well as the time and effort required to participate in them. We then map all major past competitions with respect to their design attributes, identifying similarities and differences between them, as well as design gaps, and making suggestions about the principles to be included in future competitions, putting a particular emphasis on learning as much as possible from their implementation in order to help improve forecasting accuracy and uncertainty estimation. We discuss that the task of forecasting often presents a multitude of challenges that can be difficult to capture in a single forecasting contest. To assess the caliber of a forecaster, we therefore propose that organizers of future competitions consider a multicontest approach. We suggest the idea of a forecasting-"athlon" in which different challenges of varying characteristics take place.